
    Distributed Adaptive Nearest Neighbor Classifier: Algorithm and Theory

    Full text link
    When data are of extraordinarily large size or physically stored in different locations, the distributed nearest neighbor (NN) classifier is an attractive tool for classification. We propose a novel distributed adaptive NN classifier in which the number of nearest neighbors is a tuning parameter stochastically chosen by a data-driven criterion. An early stopping rule is proposed for the search over the optimal tuning parameter, which not only speeds up the computation but also improves the finite-sample performance of the proposed algorithm. The convergence rate of the excess risk of the distributed adaptive NN classifier is investigated under various sub-sample size compositions. In particular, we show that when the sub-sample sizes are sufficiently large, the proposed classifier achieves the nearly optimal convergence rate. The effectiveness of the proposed approach is demonstrated through simulation studies as well as an empirical application to a real-world dataset.
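
    To make the mechanism concrete, here is a minimal Python sketch of a distributed NN classifier whose number of neighbors is tuned with an early stopping rule. It is not the paper's algorithm: the validation-error criterion, the binary 0/1 labels, and the patience parameter are assumptions made only for this illustration.

    import numpy as np

    def knn_predict(X_train, y_train, x, k):
        # Majority vote among the k nearest training points (binary labels 0/1).
        idx = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]
        return int(y_train[idx].mean() >= 0.5)

    def distributed_adaptive_knn(subsamples, x, k_grid, X_val, y_val, patience=3):
        # Each machine holds one sub-sample and votes with a shared k; k is tuned
        # on held-out data, stopping early once validation error stops improving.
        best_k, best_err, stall = k_grid[0], np.inf, 0
        for k in k_grid:
            preds = [
                int(np.mean([knn_predict(Xs, ys, xv, k)
                             for Xs, ys in subsamples]) >= 0.5)
                for xv in X_val
            ]
            err = np.mean(np.array(preds) != y_val)
            if err < best_err:
                best_k, best_err, stall = k, err, 0
            else:
                stall += 1
                if stall >= patience:  # early stopping rule for the tuning search
                    break
        # Final prediction: majority vote over per-machine kNN votes at best_k.
        votes = [knn_predict(Xs, ys, x, best_k) for Xs, ys in subsamples]
        return int(np.mean(votes) >= 0.5), best_k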

    Assessment of the Financial Position of the Nestlé Company

    Get PDF
    This thesis is mainly about the assessment of the financial position of the Nestlé company. The objective of this thesis is to analyze the financial performance and situation of the Nestlé company from several different aspects.

    Simple spatial scaling rules behind complex cities

    Get PDF
    Although most wealth and innovation have been the result of human interaction and cooperation, we are not yet able to quantitatively predict the spatial distributions of three main elements of cities: population, roads, and socioeconomic interactions. Using a simple model based mainly on spatial attraction and matching growth mechanisms, we reveal that the spatial scaling rules of these three elements fit within a consistent framework, which allows us to use any single observation to infer the others. All numerical and theoretical results are consistent with empirical data from ten representative cities. In addition, our model provides a general explanation of the origins of the universal super- and sub-linear aggregate scaling laws and accurately predicts kilometre-level socioeconomic activity. Our work opens a new avenue for uncovering the evolution of cities in terms of the interplay among urban elements, and it has a broad range of applications.

    This work is supported by the National Natural Science Foundation of China under Grant Nos. 61673070, 61773069, and 71731002, and by the Fundamental Research Funds for the Central Universities under Grant No. 2015KJJCB13; it is also partially supported by NSF Grants PHY-1505000, CMMI-1125290, and CHE-1213217, DTRA Grant HDTRA1-14-1-0017, and DOE Grant DE-AC07-05Id14517. J.Z. acknowledges discussions with Prof. Bettencourt of the Santa Fe Institute, Dr. Lingfei Wu of Arizona State University, and Profs. Yougui Wang and Qinghua Chen of Beijing Normal University. R.L. acknowledges helpful discussions with and comments from Dr. Remi Louf of CASA, University College London, and Dr. Longfeng Zhao of Huazhong (Central China) Normal University, as well as selfless help from Prof. Yougui Wang. R.L. is also supported by the Chinese Scholarship Council.
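
    As a rough illustration of a "spatial attraction" growth mechanism of the kind the abstract describes, here is a toy Python sketch in which new residents settle preferentially near already-dense areas. The grid size, neighborhood radius, and settle probability are all invented for the example; this is not the paper's calibrated model.

    import numpy as np

    rng = np.random.default_rng(0)

    def grow_city(steps=2000, size=60, radius=3):
        # Toy spatial-attraction growth: a new resident settles at a random site
        # with probability increasing in the population already nearby, so dense
        # areas attract further growth.
        pop = np.zeros((size, size))
        pop[size // 2, size // 2] = 1.0  # seed the city centre
        for _ in range(steps):
            while True:
                i, j = rng.integers(0, size, size=2)
                lo_i, hi_i = max(0, i - radius), min(size, i + radius + 1)
                lo_j, hi_j = max(0, j - radius), min(size, j + radius + 1)
                attraction = pop[lo_i:hi_i, lo_j:hi_j].sum()
                if rng.random() < attraction / (attraction + 1.0):
                    pop[i, j] += 1
                    break
        return pop

    city = grow_city()
    print("settled sites:", int((city > 0).sum()), "population:", int(city.sum()))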

    Specializing Small Language Models towards Complex Style Transfer via Latent Attribute Pre-Training

    Full text link
    In this work, we introduce the concept of complex text style transfer tasks and construct complex text datasets based on two widely applicable scenarios. Our dataset is the first large-scale dataset of its kind, with 700 rephrased sentences and 1,000 sentences from the game Genshin Impact. While large language models (LLMs) have shown promise in complex text style transfer, they have drawbacks such as data privacy concerns, network instability, and high deployment costs. To address these issues, we explore the effectiveness of small models (smaller than T5-3B) with implicit style pre-training through contrastive learning. We also propose a method for automated evaluation of text generation quality based on alignment with human evaluations using ChatGPT. Finally, we compare our approach with existing methods and show that our model achieves state-of-the-art performance among few-shot text style transfer models.
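
    The abstract mentions style pre-training through contrastive learning; a generic InfoNCE-style objective for that idea might look like the Python sketch below. The temperature, the in-batch-negative scheme, and the random stand-in embeddings are assumptions; the paper's actual latent attribute objective and encoder may differ.

    import torch
    import torch.nn.functional as F

    def style_contrastive_loss(anchor, positive, temperature=0.1):
        # InfoNCE-style objective: embeddings of sentences sharing a style
        # attribute are pulled together; other in-batch items act as negatives.
        a = F.normalize(anchor, dim=-1)
        p = F.normalize(positive, dim=-1)
        logits = a @ p.T / temperature        # pairwise cosine similarities
        labels = torch.arange(a.size(0))      # matching pairs on the diagonal
        return F.cross_entropy(logits, labels)

    # Toy usage: random vectors stand in for encoder outputs of same-style pairs.
    emb_a = torch.randn(8, 256)
    emb_b = emb_a + 0.05 * torch.randn(8, 256)
    print(style_contrastive_loss(emb_a, emb_b).item())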

    Non-Hermitian topological exciton-polariton corner modes

    Full text link
    We theoretically study two-dimensional exciton-polariton lattices and predict that non-Hermitian topological corner modes can form under non-resonant pumping. As a generalization of the non-Hermitian skin effect, all eigenstates in our model are localized at two corners. This also represents a higher-dimensional topology compared to other proposals in exciton-polariton systems, and we find that it allows propagating signals in the bulk of the system to travel around defects, which is not possible in one-dimensional topological lattices or in two-dimensional lattices with Hermitian edge states. Furthermore, as all polariton states are localized away from an excitation spot, the system offers an opportunity for more accurate measurement of the polariton-polariton interaction strength, since the pump-induced exciton reservoir is spatially separated from all polariton states.
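
    The non-Hermitian skin effect that this work generalizes to corners can be seen in its simplest one-dimensional form, the Hatano-Nelson model with asymmetric hopping. The Python sketch below is only that textbook toy, not the authors' two-dimensional polariton lattice, and the hopping parameters t and gamma are illustrative.

    import numpy as np

    def hatano_nelson(n=40, t=1.0, gamma=0.5):
        # 1D lattice with asymmetric hopping under open boundary conditions;
        # every eigenstate piles up at one edge (the non-Hermitian skin effect).
        H = np.zeros((n, n), dtype=complex)
        for i in range(n - 1):
            H[i + 1, i] = t + gamma  # amplitude to hop right (site i -> i+1)
            H[i, i + 1] = t - gamma  # weaker amplitude to hop left
        return H

    H = hatano_nelson()
    _, vecs = np.linalg.eig(H)
    density = (np.abs(vecs) ** 2).mean(axis=1)  # mean |psi|^2 over eigenstates
    print("right-edge weight:", density[-5:].sum().round(3),
          "bulk weight:", density[18:23].sum().round(3))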

    E2Net: Resource-Efficient Continual Learning with Elastic Expansion Network

    Full text link
    Continual learning methods are designed to learn new tasks without erasing previous knowledge. However, continual learning often requires massive computational power and storage capacity to reach satisfactory performance. In this paper, we propose a resource-efficient continual learning method called the Elastic Expansion Network (E2Net). Leveraging core subnet distillation and precise replay sample selection, E2Net achieves superior average accuracy and diminished forgetting within the same computational and storage constraints, all while minimizing processing time. In E2Net, we propose Representative Network Distillation to identify a representative core subnet by assessing parameter quantity and output similarity with the working network; distilling analogous subnets within the working network mitigates reliance on rehearsal buffers and facilitates knowledge transfer across previous tasks. To enhance storage resource utilization, we then propose Subnet Constraint Experience Replay, which optimizes rehearsal efficiency through a sample storage strategy based on the structures of representative networks. Extensive experiments, conducted predominantly in cloud environments with diverse datasets and also spanning edge environments, demonstrate that E2Net consistently outperforms state-of-the-art methods. In addition, our method outperforms competitors in terms of both storage and computational requirements.
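
    The general pattern of distilling against a frozen subnet on replayed samples can be sketched in a few lines of Python. This is a generic distillation-with-replay step, not E2Net's actual interface: frozen_subnet, the mixing weight alpha, and the temperature T are all assumptions made for the example.

    import torch
    import torch.nn.functional as F

    def continual_step(model, frozen_subnet, batch, replay, alpha=0.5, T=2.0):
        # One training step: learn the current task while a distillation term
        # keeps the working network close to a frozen core subnet on old data.
        x_new, y_new = batch
        loss = F.cross_entropy(model(x_new), y_new)
        if replay is not None:
            x_old, _ = replay
            with torch.no_grad():
                teacher = frozen_subnet(x_old) / T   # soft targets from the subnet
            student = model(x_old) / T
            kd = F.kl_div(F.log_softmax(student, dim=-1),
                          F.softmax(teacher, dim=-1),
                          reduction="batchmean") * T * T
            loss = loss + alpha * kd
        return loss

    # Toy usage: linear "networks" stand in for the working net and core subnet.
    net = torch.nn.Linear(16, 4)
    sub = torch.nn.Linear(16, 4)
    batch = (torch.randn(8, 16), torch.randint(0, 4, (8,)))
    replay = (torch.randn(8, 16), None)
    print(continual_step(net, sub, batch, replay).item())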